In total, we have data from 2303 participants. Of those, 2298 finished the experiment or failed at the captcha stage (i.e., did not abandon the study partway through). Of those, 2278 solved two or more captchas and proceeded to the roulette part. All following analyses are based on these 2278 participants.
The distribution of conditions is as follows (prop = proportion):
## # A tibble: 3 × 3
## expt_cond n prop
## <fct> <int> <dbl>
## 1 No Message 755 0.331
## 2 Message 1 796 0.349
## 3 Message 2 727 0.319
Participants with and without experience in online roulette:
## # A tibble: 2 × 3
## roulette n prop
## <fct> <int> <dbl>
## 1 Yes 1786 0.784
## 2 No 492 0.216
Some demographic information:
## # A tibble: 5 × 3
## gender n prop
## <chr> <int> <dbl>
## 1 Female 965 0.424
## 2 Male 1289 0.566
## 3 Non-binary 15 0.00658
## 4 None 5 0.00219
## 5 Prefer Not to Disclose 4 0.00176
## vars n mean sd median trimmed mad min max range skew kurtosis se
## age 1 2276 35.94 10.98 34 34.99 10.38 18 87.0 69.0 0.82 0.47 0.23
## bonus 2 2278 4.91 3.45 5 4.79 1.04 0 79.4 79.4 7.56 123.73 0.07
## bet_count 3 2278 5.97 12.94 3 3.39 4.45 0 198.0 198.0 7.00 70.84 0.27
Some conditional demographic information:
## # A tibble: 6 × 4
## gender expt_cond n prop
## <chr> <fct> <int> <dbl>
## 1 Female No Message 318 0.140
## 2 Female Message 1 329 0.144
## 3 Female Message 2 318 0.140
## 4 Male No Message 427 0.187
## 5 Male Message 1 462 0.203
## 6 Male Message 2 400 0.176
##
## Descriptive statistics by group
## expt_cond: No Message
## vars n mean sd median trimmed mad min max range skew kurtosis se
## expt_cond* 1 755 1.00 0.00 1 1.00 0.00 1 1.0 0.0 NaN NaN 0.00
## age 2 754 35.82 10.71 34 34.82 9.64 18 81.0 63.0 0.92 0.82 0.39
## bonus 3 755 4.98 4.18 5 4.79 1.48 0 79.4 79.4 9.52 147.49 0.15
## bet_count 4 755 6.22 13.10 3 3.57 4.45 0 187.0 187.0 6.64 65.17 0.48
## ----------------------------------------------------------------------------------
## expt_cond: Message 1
## vars n mean sd median trimmed mad min max range skew kurtosis se
## expt_cond* 1 796 2.00 0.00 2 2.00 0.00 2 2.0 0.0 NaN NaN 0.00
## age 2 795 36.20 11.12 34 35.28 10.38 19 87.0 68.0 0.79 0.31 0.39
## bonus 3 796 4.95 3.43 5 4.81 0.74 0 46.6 46.6 5.20 50.70 0.12
## bet_count 4 796 5.57 10.36 3 3.33 4.45 0 104.0 104.0 4.67 28.74 0.37
## ----------------------------------------------------------------------------------
## expt_cond: Message 2
## vars n mean sd median trimmed mad min max range skew kurtosis se
## expt_cond* 1 727 3.00 0.00 3 3.00 0.00 3 3 0 NaN NaN 0.00
## age 2 727 35.79 11.10 34 34.85 10.38 18 78 60 0.77 0.29 0.41
## bonus 3 727 4.80 2.52 5 4.78 0.89 0 20 20 1.28 7.20 0.09
## bet_count 4 727 6.14 15.15 3 3.30 4.45 0 198 198 7.56 72.68 0.56
Now let’s take a look at the PGSI scores:
## # A tibble: 1 × 2
## pgsi_mean pgsi_sd
## <dbl> <dbl>
## 1 2.49 3.76
## # A tibble: 3 × 3
## expt_cond pgsi_mean pgsi_sd
## <fct> <dbl> <dbl>
## 1 No Message 2.51 3.90
## 2 Message 1 2.45 3.76
## 3 Message 2 2.52 3.61
Next let’s take a look at our main DV, proportion of money bet, which is defined as follows: \[\texttt{prop_bet} = \frac{\texttt{amount}}{5 + \texttt{total_win}}\]
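As a minimal sketch, the DV can be computed as follows (variable names taken from the formula above; the 5 is the initial endowment):

```r
# Minimal sketch of the DV computation, assuming the names from the formula
# above (`amount` = amount bet, `total_win` = winnings so far, 5 = endowment):
prop_bet <- function(amount, total_win) {
  amount / (5 + total_win)
}
prop_bet(2.5, 0)  # betting half the initial endowment gives 0.5
```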
## # A tibble: 3 × 3
## expt_cond prop_bet_mean prop_bet_sd
## <fct> <dbl> <dbl>
## 1 No Message 0.344 0.322
## 2 Message 1 0.323 0.321
## 3 Message 2 0.316 0.319
Our DV clearly does not look normally distributed.
## # A tibble: 1 × 3
## gamble_at_all gamble_everything proportion_bet_rest
## <dbl> <dbl> <dbl>
## 1 0.746 0.122 0.361
## # A tibble: 1 × 3
## no_gamble gamble_at_all gamble_everything
## <int> <int> <int>
## 1 579 1699 208
Binomial confidence or credibility intervals for the probability to gamble at all:
## method x n mean lower upper
## 1 agresti-coull 1699 2278 0.7458297 0.7275419 0.7632898
## 2 asymptotic 1699 2278 0.7458297 0.7279503 0.7637091
## 3 bayes 1699 2278 0.7457218 0.7277920 0.7635307
## 4 cloglog 1699 2278 0.7458297 0.7274299 0.7631969
## 5 exact 1699 2278 0.7458297 0.7274217 0.7636032
## 6 logit 1699 2278 0.7458297 0.7275397 0.7632913
## 7 probit 1699 2278 0.7458297 0.7276259 0.7633743
## 8 profile 1699 2278 0.7458297 0.7276804 0.7634263
## 9 lrt 1699 2278 0.7458297 0.7276681 0.7634233
## 10 prop.test 1699 2278 0.7458297 0.7273225 0.7634990
## 11 wilson 1699 2278 0.7458297 0.7275467 0.7632850
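The table above matches the output format of `binom::binom.confint(x = 1699, n = 2278, methods = "all")` from the binom package (attached in the session info below). As a package-free sanity check, the Wilson row can be reproduced by hand:

```r
# Wilson score interval computed by hand (a sanity check on the table above):
wilson_ci <- function(x, n, conf.level = 0.95) {
  z <- qnorm(1 - (1 - conf.level) / 2)
  p <- x / n
  centre <- (p + z^2 / (2 * n)) / (1 + z^2 / n)
  half   <- z * sqrt(p * (1 - p) / n + z^2 / (4 * n^2)) / (1 + z^2 / n)
  c(lower = centre - half, upper = centre + half)
}
round(wilson_ci(1699, 2278), 4)  # lower 0.7275, upper 0.7633
```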
Distribution per condition:
We use a custom parameterization of a zero-one-inflated beta-regression model (see also here). The likelihood of the model is given by:
\[\begin{align} f(y) &= (1 - g) & & \text{if } y = 0 \\ f(y) &= g \times e & & \text{if } y = 1 \\ f(y) &= g \times (1 - e) \times \text{Beta}(a,b) & & \text{if } y \notin \{0, 1\} \\ a &= \mu \times \phi \\ b &= (1-\mu) \times \phi \end{align}\]
Here, \(1 - g\) is the zero-inflation probability, \(g\) (zipp) reflects the probability to gamble at all, \(e\) is the conditional one-inflation probability (coi), i.e., the conditional probability to gamble everything (to have a value of one, given that one gambles at all), \(\mu\) is the mean of the beta distribution (Intercept), and \(\phi\) is its precision (phi). As we use Stan for modelling, all parameters need to be modelled on the real line via appropriate link functions: for \(\phi\) the link is log (inverse is exp()), for all other parameters it is logit (inverse is plogis()).
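The likelihood can also be written down directly. The following is a small sketch of the density on the natural scale (not the actual Stan/brms implementation):

```r
# Sketch of the zoib2 density on the natural scale (not the Stan/brms code);
# g = zipp (probability to gamble), e = coi (conditional probability to
# gamble everything), mu/phi parameterize the beta distribution.
dzoib2 <- function(y, mu, phi, g, e) {
  if (y == 0) {
    1 - g                        # did not gamble at all
  } else if (y == 1) {
    g * e                        # gambled everything
  } else {
    g * (1 - e) * dbeta(y, shape1 = mu * phi, shape2 = (1 - mu) * phi)
  }
}
# Total mass is 1: (1 - g) + g*e + g*(1 - e) * [beta integrates to 1]
```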
We fit this model with experimental condition as a factor on the three main model parameters (i.e., only the precision parameter is fixed across conditions). The following table provides an overview of the model and all model parameters and shows good convergence.
## Family: zoib2
## Links: mu = logit; phi = log; zipp = logit; coi = logit
## Formula: prop_bet ~ 0 + Intercept + expt_cond
## phi ~ 1
## zipp ~ 0 + Intercept + expt_cond
## coi ~ 0 + Intercept + expt_cond
## Data: duse (Number of observations: 2278)
## Draws: 4 chains, each with iter = 26000; warmup = 1000; thin = 1;
## total post-warmup draws = 1e+05
##
## Population-Level Effects:
## Estimate Est.Error l-95% CI u-95% CI Rhat Bulk_ESS Tail_ESS
## phi_Intercept 1.18 0.03 1.11 1.24 1.00 101603 73958
## Intercept -0.45 0.04 -0.53 -0.37 1.00 70041 69931
## expt_condMessage1 -0.10 0.06 -0.22 0.02 1.00 76963 75189
## expt_condMessage2 -0.11 0.06 -0.23 0.01 1.00 78260 75189
## zipp_Intercept 1.12 0.08 0.95 1.29 1.00 72444 72941
## zipp_expt_condMessage1 -0.10 0.12 -0.33 0.12 1.00 80318 76790
## zipp_expt_condMessage2 -0.01 0.12 -0.25 0.22 1.00 81310 78410
## coi_Intercept -1.99 0.13 -2.25 -1.74 1.00 69064 67156
## coi_expt_condMessage1 0.07 0.18 -0.28 0.42 1.00 77152 73396
## coi_expt_condMessage2 -0.04 0.19 -0.40 0.32 1.00 79151 77035
##
## Draws were sampled using sampling(NUTS). For each parameter, Bulk_ESS
## and Tail_ESS are effective sample size measures, and Rhat is the potential
## scale reduction factor on split chains (at convergence, Rhat = 1).
As a visual convergence check, we plot the density and trace plots for the four intercept parameters representing the no message (control) condition or the overall mean (for phi).
The model does not have any obvious problems, even without priors for the condition specific effects.
As expected, the synthetic data generated from the model looks a lot like the actual data. This suggests that the model is adequate for the data.
Our hypothesis is about the proportion bet, \(Pr_{bet}\), which is given by:
\[Pr_{bet} = (g \times e) + (g \times (1-e) \times \mu)\]
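As a sanity check, plugging the No Message posterior point estimates from the model summary above into this formula (after back-transforming from the link scale) recovers the expected value on the response scale:

```r
# Back-transform the No Message intercepts from the model summary above:
g  <- plogis(1.12)   # zipp_Intercept (logit link)
e  <- plogis(-1.99)  # coi_Intercept  (logit link)
mu <- plogis(-0.45)  # Intercept      (logit link)
(g * e) + (g * (1 - e) * mu)  # ~0.349
```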
The following shows the resulting \(Pr_{bet}\) posterior distributions across conditions.
## # A tibble: 3 × 7
## expt_cond prop_bet .lower .upper .width .point .interval
## <fct> <dbl> <dbl> <dbl> <dbl> <chr> <chr>
## 1 No Message 0.349 0.326 0.373 0.95 mean qi
## 2 Message 1 0.328 0.306 0.351 0.95 mean qi
## 3 Message 2 0.329 0.306 0.352 0.95 mean qi
## # A tibble: 2 × 7
## expt_cond prop_bet .lower .upper .width .point .interval
## <chr> <dbl> <dbl> <dbl> <dbl> <chr> <chr>
## 1 Message 1 - No Message -0.0210 -0.0532 0.0112 0.95 mean qi
## 2 Message 2 - No Message -0.0201 -0.0528 0.0126 0.95 mean qi
Mu:
## expt_cond response lower.HPD upper.HPD
## No Message 0.389 0.369 0.410
## Message 1 0.366 0.346 0.385
## Message 2 0.364 0.344 0.384
##
## Point estimate displayed: median
## Results are back-transformed from the logit scale
## HPD interval probability: 0.95
g:
## expt_cond response lower.HPD upper.HPD
## No Message 0.754 0.723 0.784
## Message 1 0.734 0.703 0.765
## Message 2 0.751 0.720 0.782
##
## Point estimate displayed: median
## Results are back-transformed from the logit scale
## HPD interval probability: 0.95
e:
## expt_cond response lower.HPD upper.HPD
## No Message 0.121 0.0943 0.148
## Message 1 0.128 0.1017 0.156
## Message 2 0.117 0.0907 0.144
##
## Point estimate displayed: median
## Results are back-transformed from the logit scale
## HPD interval probability: 0.95
## Family: zoib2
## Links: mu = logit; phi = log; zipp = logit; coi = logit
## Formula: prop_bet ~ expt_cond + pgsi_c
## phi ~ 1
## zipp ~ expt_cond + pgsi_c
## coi ~ expt_cond + pgsi_c
## Data: duse (Number of observations: 2278)
## Draws: 4 chains, each with iter = 26000; warmup = 1000; thin = 1;
## total post-warmup draws = 1e+05
##
## Population-Level Effects:
## Estimate Est.Error l-95% CI u-95% CI Rhat Bulk_ESS Tail_ESS
## Intercept -0.45 0.04 -0.54 -0.37 1.00 158294 81394
## phi_Intercept 1.18 0.03 1.12 1.25 1.00 190159 79987
## zipp_Intercept 1.14 0.09 0.97 1.30 1.00 145528 79286
## coi_Intercept -2.04 0.13 -2.31 -1.79 1.00 151936 80238
## expt_condMessage1 -0.10 0.06 -0.22 0.02 1.00 151131 86259
## expt_condMessage2 -0.11 0.06 -0.23 0.01 1.00 153529 87485
## pgsi_c 0.02 0.01 0.01 0.03 1.00 233587 75151
## zipp_expt_condMessage1 -0.10 0.12 -0.33 0.13 1.00 142204 88368
## zipp_expt_condMessage2 -0.02 0.12 -0.25 0.22 1.00 139993 85338
## zipp_pgsi_c 0.07 0.02 0.04 0.10 1.00 210893 75776
## coi_expt_condMessage1 0.06 0.18 -0.29 0.42 1.00 149288 84998
## coi_expt_condMessage2 -0.04 0.19 -0.41 0.32 1.00 146776 83772
## coi_pgsi_c 0.09 0.02 0.06 0.12 1.00 202813 78284
##
## Draws were sampled using sampling(NUTS). For each parameter, Bulk_ESS
## and Tail_ESS are effective sample size measures, and Rhat is the potential
## scale reduction factor on split chains (at convergence, Rhat = 1).
As a visual convergence check, we plot the density and trace plots for the four intercept parameters representing the no message condition or the overall mean (for phi).
Let’s then take a look at the difference distribution of proportion bet after adjusting for PGSI:
## # A tibble: 2 × 7
## expt_cond prop_bet .lower .upper .width .point .interval
## <chr> <dbl> <dbl> <dbl> <dbl> <chr> <chr>
## 1 Message 1 - No Message -0.0210 -0.0528 0.0108 0.95 mean qi
## 2 Message 2 - No Message -0.0204 -0.0525 0.0118 0.95 mean qi
## Family: zoib2
## Links: mu = logit; phi = log; zipp = logit; coi = logit
## Formula: prop_bet ~ expt_cond * pgsi_c
## phi ~ 1
## zipp ~ expt_cond * pgsi_c
## coi ~ expt_cond * pgsi_c
## Data: duse (Number of observations: 2278)
## Draws: 4 chains, each with iter = 26000; warmup = 1000; thin = 1;
## total post-warmup draws = 1e+05
##
## Population-Level Effects:
## Estimate Est.Error l-95% CI u-95% CI Rhat Bulk_ESS Tail_ESS
## Intercept -0.45 0.04 -0.54 -0.37 1.00 118407 77245
## phi_Intercept 1.19 0.03 1.12 1.25 1.00 133666 77841
## zipp_Intercept 1.13 0.09 0.96 1.30 1.00 113916 75047
## coi_Intercept -2.02 0.13 -2.29 -1.77 1.00 117269 76658
## expt_condMessage1 -0.10 0.06 -0.22 0.02 1.00 118768 80008
## expt_condMessage2 -0.11 0.06 -0.23 0.01 1.00 120165 82565
## pgsi_c -0.00 0.01 -0.03 0.02 1.00 91690 77041
## expt_condMessage1:pgsi_c 0.02 0.02 -0.01 0.05 1.00 97225 82206
## expt_condMessage2:pgsi_c 0.05 0.02 0.02 0.08 1.00 101253 81627
## zipp_expt_condMessage1 -0.08 0.12 -0.31 0.15 1.00 115770 80347
## zipp_expt_condMessage2 0.00 0.12 -0.24 0.24 1.00 114326 80592
## zipp_pgsi_c 0.04 0.02 -0.01 0.09 1.00 90899 71179
## zipp_expt_condMessage1:pgsi_c 0.05 0.04 -0.03 0.12 1.00 97514 79197
## zipp_expt_condMessage2:pgsi_c 0.05 0.04 -0.03 0.12 1.00 99393 78084
## coi_expt_condMessage1 0.03 0.19 -0.33 0.39 1.00 116610 80804
## coi_expt_condMessage2 -0.09 0.19 -0.47 0.29 1.00 114635 81389
## coi_pgsi_c 0.06 0.03 0.01 0.12 1.00 83995 71052
## coi_expt_condMessage1:pgsi_c 0.03 0.04 -0.05 0.10 1.00 90975 77564
## coi_expt_condMessage2:pgsi_c 0.04 0.04 -0.04 0.12 1.00 92052 79568
##
## Draws were sampled using sampling(NUTS). For each parameter, Bulk_ESS
## and Tail_ESS are effective sample size measures, and Rhat is the potential
## scale reduction factor on split chains (at convergence, Rhat = 1).
As a visual convergence check, we plot the density and trace plots for the four intercept parameters representing the no message condition or the overall mean (for phi).
Let’s then take a look at the difference distribution of proportion bet after adjusting for PGSI:
## # A tibble: 2 × 7
## expt_cond prop_bet .lower .upper .width .point .interval
## <chr> <dbl> <dbl> <dbl> <dbl> <chr> <chr>
## 1 Message 1 - No Message -0.0209 -0.0529 0.0113 0.95 mean qi
## 2 Message 2 - No Message -0.0211 -0.0536 0.0114 0.95 mean qi
Let’s begin with some simple descriptive statistics of the clicks on the GamCare page.
## # A tibble: 3 × 5
## expt_cond proportion sd success n
## <fct> <dbl> <dbl> <int> <int>
## 1 No Message 0.0278 0.165 21 755
## 2 Message 1 0.0289 0.168 23 796
## 3 Message 2 0.0248 0.155 18 727
The model shows no obvious convergence problems:
## Family: bernoulli
## Links: mu = logit
## Formula: gamcare_click ~ expt_cond
## Data: duse (Number of observations: 2278)
## Draws: 4 chains, each with iter = 26000; warmup = 1000; thin = 1;
## total post-warmup draws = 1e+05
##
## Population-Level Effects:
## Estimate Est.Error l-95% CI u-95% CI Rhat Bulk_ESS Tail_ESS
## Intercept -3.57 0.22 -4.04 -3.15 1.00 63730 55878
## expt_condMessage1 0.04 0.31 -0.56 0.65 1.00 68122 65213
## expt_condMessage2 -0.12 0.33 -0.77 0.52 1.00 67079 66781
##
## Draws were sampled using sampling(NUTS). For each parameter, Bulk_ESS
## and Tail_ESS are effective sample size measures, and Rhat is the potential
## scale reduction factor on split chains (at convergence, Rhat = 1).
Let’s take a look at the predicted probabilities and differences:
## expt_cond response lower.HPD upper.HPD
## No Message 0.0276 0.0166 0.0398
## Message 1 0.0287 0.0181 0.0410
## Message 2 0.0245 0.0140 0.0363
##
## Point estimate displayed: median
## Results are back-transformed from the logit scale
## HPD interval probability: 0.95
## # A tibble: 2 × 7
## expt_cond prob .lower .upper .width .point .interval
## <chr> <dbl> <dbl> <dbl> <dbl> <chr> <chr>
## 1 Message 1 - No Message 0.00108 -0.0155 0.0176 0.95 mean qi
## 2 Message 2 - No Message -0.00303 -0.0195 0.0134 0.95 mean qi
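Since the Bernoulli model uses a logit link, the predicted probabilities are simply the inverse-logit of the linear predictor; e.g., for the No Message intercept:

```r
# Inverse-logit of the No Message intercept from the model summary above:
plogis(-3.57)  # ~0.027, in line with the predicted probability of 0.0276
```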
Now the results figure:
There are a total of 13590 bets. If we remove the first bet of each participant, 11891 bets remain. Of those, 29 (0.24%) took longer than 120 seconds. Following our pre-registration, we remove these betting times from the analysis.
The following histogram shows the distribution of betting times.
We can also take a look at some descriptive statistics of the distribution:
## # A tibble: 1 × 3
## time_mean time_median time_sd
## <dbl> <dbl> <dbl>
## 1 9.69 5.85 11.2
## # A tibble: 3 × 4
## expt_cond time_mean time_median time_sd
## <fct> <dbl> <dbl> <dbl>
## 1 No Message 9.30 5.90 9.87
## 2 Message 1 9.27 5.39 11.5
## 3 Message 2 10.5 6.29 12.1
We analyse the betting times shown above using a shifted-lognormal model with by-participant random intercepts for the log-mean, allowing both the log-mean and the log-SD to vary across message conditions. The following shows the model summary (which shows no obvious convergence problems).
## Family: shifted_lognormal
## Links: mu = identity; sigma = log; ndt = identity
## Formula: time ~ expt_cond + (1 | ppt_id)
## sigma ~ expt_cond
## Data: times_use2 (Number of observations: 11862)
## Draws: 10 chains, each with iter = 2001; warmup = 334; thin = 3;
## total post-warmup draws = 5557
##
## Group-Level Effects:
## ~ppt_id (Number of levels: 1418)
## Estimate Est.Error l-95% CI u-95% CI Rhat Bulk_ESS Tail_ESS
## sd(Intercept) 0.68 0.02 0.65 0.72 1.00 8397 12304
##
## Population-Level Effects:
## Estimate Est.Error l-95% CI u-95% CI Rhat Bulk_ESS Tail_ESS
## Intercept 1.89 0.04 1.81 1.96 1.00 7194 10682
## sigma_Intercept -0.24 0.01 -0.26 -0.21 1.00 15694 16073
## expt_condMessage1 -0.05 0.05 -0.15 0.05 1.00 7893 12591
## expt_condMessage2 0.03 0.05 -0.07 0.13 1.00 7540 10982
## sigma_expt_condMessage1 0.05 0.02 0.02 0.09 1.00 15624 15843
## sigma_expt_condMessage2 0.03 0.02 -0.00 0.06 1.00 17097 15251
##
## Family Specific Parameters:
## Estimate Est.Error l-95% CI u-95% CI Rhat Bulk_ESS Tail_ESS
## ndt 0.64 0.01 0.61 0.66 1.00 16950 15538
##
## Draws were sampled using sampling(NUTS). For each parameter, Bulk_ESS
## and Tail_ESS are effective sample size measures, and Rhat is the potential
## scale reduction factor on split chains (at convergence, Rhat = 1).
The summary table shows that one of the message-specific parameters for the log-SD provides evidence for a difference between the No Message and Message 1 conditions (the 95% CI for sigma_expt_condMessage1 does not include 0).
The model shows no obvious convergence problems:
The model is also able to adequately reproduce the shape of the observed data.
Our hypothesis is about the mean betting time, which we need to calculate from the model parameters log-mean \(m\) and log-SD \(\sigma\) as \(\text{mean} = \exp(m + \sigma^2/2)\).
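As a sanity check, applying this formula to the No Message point estimates from the summary above (keeping in mind that sigma is modelled with a log link) recovers the predicted mean reported next:

```r
# No Message point estimates from the shifted-lognormal summary above;
# sigma is modelled on the log scale, so exp() the sigma_Intercept first:
m     <- 1.89          # Intercept
sigma <- exp(-0.24)    # sigma_Intercept back-transformed
exp(m + sigma^2 / 2)   # ~9.02
```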
The following table shows the predicted mean betting times, which are similar to the observed ones and reproduce the ordering of the condition means.
## # A tibble: 3 × 7
## expt_cond mean .lower .upper .width .point .interval
## <fct> <dbl> <dbl> <dbl> <dbl> <chr> <chr>
## 1 No Message 9.02 8.36 9.70 0.95 mean qi
## 2 Message 1 8.86 8.23 9.53 0.95 mean qi
## 3 Message 2 9.48 8.77 10.2 0.95 mean qi
We can also take a look at the differences from the no message condition:
## # A tibble: 2 × 7
## expt_cond mean .lower .upper .width .point .interval
## <chr> <dbl> <dbl> <dbl> <dbl> <chr> <chr>
## 1 Message 1 - No Message -0.154 -1.08 0.777 0.95 mean qi
## 2 Message 2 - No Message 0.459 -0.530 1.44 0.95 mean qi
Now the results figure:
The following histogram shows the distribution of the number of spins.
Following the preregistration, we analyse the distribution after excluding all observations with 0 spins. We then use a negative binomial model to describe the data.
This model shows no obvious convergence problems.
## Family: negbinomial
## Links: mu = log; shape = identity
## Formula: bet_count | trunc(lb = 1) ~ expt_cond
## Data: part_nozero (Number of observations: 1699)
## Draws: 4 chains, each with iter = 26000; warmup = 1000; thin = 1;
## total post-warmup draws = 1e+05
##
## Population-Level Effects:
## Estimate Est.Error l-95% CI u-95% CI Rhat Bulk_ESS Tail_ESS
## Intercept 1.43 0.11 1.19 1.63 1.00 47585 46175
## expt_condMessage1 -0.11 0.10 -0.30 0.09 1.00 58678 60277
## expt_condMessage2 -0.01 0.10 -0.21 0.19 1.00 58545 59904
##
## Family Specific Parameters:
## Estimate Est.Error l-95% CI u-95% CI Rhat Bulk_ESS Tail_ESS
## shape 0.25 0.03 0.18 0.31 1.00 45995 44707
##
## Draws were sampled using sampling(NUTS). For each parameter, Bulk_ESS
## and Tail_ESS are effective sample size measures, and Rhat is the potential
## scale reduction factor on split chains (at convergence, Rhat = 1).
The data seems to be well described by the model.
When we zoom in (i.e., ignore data points above 50 for the plot), we can see that the real and synthetic data match quite well.
Then let’s take a look at the predicted number of spins:
## # A tibble: 3 × 7
## expt_cond mean .lower .upper .width .point .interval
## <fct> <dbl> <dbl> <dbl> <dbl> <chr> <chr>
## 1 No Message 4.19 3.30 5.11 0.95 median qi
## 2 Message 1 3.76 2.96 4.59 0.95 median qi
## 3 Message 2 4.15 3.26 5.07 0.95 median qi
## # A tibble: 2 × 7
## expt_cond mean .lower .upper .width .point .interval
## <chr> <dbl> <dbl> <dbl> <dbl> <chr> <chr>
## 1 Message 1 - No Message -0.427 -1.22 0.343 0.95 median qi
## 2 Message 2 - No Message -0.0461 -0.874 0.793 0.95 median qi
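As the negative-binomial model uses a log link, the predicted counts are the exp() of the linear predictor; e.g., for the No Message condition:

```r
# Back-transform the No Message intercept from the model summary above:
exp(1.43)  # ~4.18, close to the predicted value of 4.19
```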
Now the results figure:
## R version 4.1.3 (2022-03-10)
## Platform: x86_64-pc-linux-gnu (64-bit)
## Running under: Ubuntu 20.04.4 LTS
##
## Matrix products: default
## BLAS: /usr/lib/x86_64-linux-gnu/blas/libblas.so.3.9.0
## LAPACK: /usr/lib/x86_64-linux-gnu/lapack/liblapack.so.3.9.0
##
## locale:
## [1] LC_CTYPE=en_GB.UTF-8 LC_NUMERIC=C LC_TIME=en_GB.UTF-8
## [4] LC_COLLATE=en_GB.UTF-8 LC_MONETARY=en_GB.UTF-8 LC_MESSAGES=en_GB.UTF-8
## [7] LC_PAPER=en_GB.UTF-8 LC_NAME=C LC_ADDRESS=C
## [10] LC_TELEPHONE=C LC_MEASUREMENT=en_GB.UTF-8 LC_IDENTIFICATION=C
##
## attached base packages:
## [1] stats graphics grDevices utils datasets methods base
##
## other attached packages:
## [1] binom_1.1-1 emmeans_1.7.2 tidybayes_3.0.2 brms_2.16.3 Rcpp_1.0.8 forcats_0.5.1
## [7] stringr_1.4.0 dplyr_1.0.8 purrr_0.3.4 readr_2.1.2 tidyr_1.2.0 tibble_3.1.6
## [13] ggplot2_3.3.5 tidyverse_1.3.1
##
## loaded via a namespace (and not attached):
## [1] readxl_1.3.1 backports_1.4.1 plyr_1.8.6 igraph_1.2.11 svUnit_1.0.6
## [6] splines_4.1.3 crosstalk_1.2.0 TH.data_1.1-0 rstantools_2.1.1 inline_0.3.19
## [11] digest_0.6.29 htmltools_0.5.2 rsconnect_0.8.25 fansi_1.0.2 magrittr_2.0.1
## [16] checkmate_2.0.0 tzdb_0.2.0 modelr_0.1.8 RcppParallel_5.1.5 matrixStats_0.61.0
## [21] vroom_1.5.7 xts_0.12.1 sandwich_3.0-1 prettyunits_1.1.1 colorspace_2.0-2
## [26] rvest_1.0.2 ggdist_3.1.1 haven_2.4.3 xfun_0.29 callr_3.7.0
## [31] crayon_1.5.0 jsonlite_1.7.3 lme4_1.1-28 survival_3.2-13 zoo_1.8-9
## [36] glue_1.6.1 gtable_0.3.0 distributional_0.3.0 pkgbuild_1.3.1 rstan_2.21.3
## [41] abind_1.4-5 scales_1.1.1 mvtnorm_1.1-3 DBI_1.1.2 miniUI_0.1.1.1
## [46] xtable_1.8-4 diffobj_0.3.5 tmvnsim_1.0-2 bit_4.0.4 stats4_4.1.3
## [51] StanHeaders_2.21.0-7 DT_0.20 htmlwidgets_1.5.4 httr_1.4.2 threejs_0.3.3
## [56] arrayhelpers_1.1-0 posterior_1.2.0 ellipsis_0.3.2 pkgconfig_2.0.3 loo_2.4.1
## [61] farver_2.1.0 sass_0.4.0 dbplyr_2.1.1 utf8_1.2.2 labeling_0.4.2
## [66] tidyselect_1.1.1 rlang_1.0.2 reshape2_1.4.4 later_1.3.0 munsell_0.5.0
## [71] cellranger_1.1.0 tools_4.1.3 cli_3.2.0 generics_0.1.2 broom_0.7.12
## [76] ggridges_0.5.3 evaluate_0.14 fastmap_1.1.0 yaml_2.2.2 bit64_4.0.5
## [81] processx_3.5.2 knitr_1.37 fs_1.5.2 nlme_3.1-155 mime_0.12
## [86] projpred_2.0.2 xml2_1.3.3 compiler_4.1.3 bayesplot_1.8.1 shinythemes_1.2.0
## [91] rstudioapi_0.13 gamm4_0.2-6 reprex_2.0.1 bslib_0.3.1 stringi_1.7.6
## [96] highr_0.9 ps_1.6.0 Brobdingnag_1.2-7 lattice_0.20-45 Matrix_1.4-1
## [101] psych_2.1.9 nloptr_2.0.0 markdown_1.1 shinyjs_2.1.0 tensorA_0.36.2
## [106] vctrs_0.3.8 pillar_1.7.0 lifecycle_1.0.1 jquerylib_0.1.4 bridgesampling_1.1-2
## [111] estimability_1.3 cowplot_1.1.1 httpuv_1.6.5 R6_2.5.1 promises_1.2.0.1
## [116] gridExtra_2.3 codetools_0.2-18 boot_1.3-28 colourpicker_1.1.1 MASS_7.3-55
## [121] gtools_3.9.2 assertthat_0.2.1 withr_2.4.3 mnormt_2.0.2 shinystan_2.5.0
## [126] multcomp_1.4-18 mgcv_1.8-39 parallel_4.1.3 hms_1.1.1 grid_4.1.3
## [131] coda_0.19-4 minqa_1.2.4 rmarkdown_2.11 shiny_1.7.1 lubridate_1.8.0
## [136] base64enc_0.1-3 dygraphs_1.1.1.6